
Make _stable_1d_sort(nb) optional #196

Closed · wants to merge 2 commits into master from stable-1d-sort-optional-nb
Conversation

carmocca (Contributor)

Before submitting

  • Was this discussed/approved via a GitHub issue? (no need for typos and docs improvements)
  • Did you read the contributor guideline, Pull Request section?
  • [n/a] Did you make sure to update the docs?
  • Did you write any new necessary tests?

What does this PR do?

Fixes #195

@carmocca carmocca added the bug / fix Something isn't working label Apr 22, 2021
codecov bot commented Apr 22, 2021

Codecov Report

Merging #196 (25c8a2d) into master (d3058c4) will decrease coverage by 16.76%.
The diff coverage is 75.00%.

Impacted file tree graph

@@             Coverage Diff             @@
##           master     #196       +/-   ##
===========================================
- Coverage   96.63%   79.86%   -16.77%     
===========================================
  Files         180       90       -90     
  Lines        5760     2871     -2889     
===========================================
- Hits         5566     2293     -3273     
- Misses        194      578      +384     
Flag Coverage Δ
Linux 79.86% <75.00%> (-0.07%) ⬇️
Windows 79.86% <75.00%> (-0.07%) ⬇️
cpu 79.86% <75.00%> (-16.77%) ⬇️
gpu ?
macOS 79.86% <75.00%> (-16.77%) ⬇️
pytest 79.86% <75.00%> (-16.77%) ⬇️
python3.6 ?
python3.8 ?
python3.9 ?
torch1.3.1 ?
torch1.4.0 ?
torch1.8.1 ?

Flags with carried forward coverage won't be shown.

Impacted Files Coverage Δ
torchmetrics/utilities/data.py 82.08% <75.00%> (-13.37%) ⬇️
torchmetrics/utilities/distributed.py 22.85% <0.00%> (-74.29%) ⬇️
torchmetrics/classification/auc.py 47.61% <0.00%> (-52.39%) ⬇️
torchmetrics/functional/classification/auroc.py 46.15% <0.00%> (-40.01%) ⬇️
torchmetrics/metric.py 56.17% <0.00%> (-39.05%) ⬇️
torchmetrics/functional/regression/psnr.py 60.60% <0.00%> (-36.37%) ⬇️
torchmetrics/functional/classification/accuracy.py 57.57% <0.00%> (-36.37%) ⬇️
...chmetrics/functional/classification/stat_scores.py 66.66% <0.00%> (-33.34%) ⬇️
...rics/functional/classification/confusion_matrix.py 66.66% <0.00%> (-33.34%) ⬇️
torchmetrics/utilities/checks.py 60.54% <0.00%> (-31.90%) ⬇️
... and 133 more

Continue to review full report at Codecov.

Legend
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update d3058c4...25c8a2d. Read the comment docs.

if n < nb:
    x_max = x.max()
    x = torch.cat([x, (x_max + 1) * torch.ones(nb - n, dtype=x.dtype, device=x.device)], 0)
if nb is not None:

IMO we should just remove the nb option. I don't see when you'd ever want it to be anything other than 2049. The padding is just a hack to get torch to sort the array in a stable manner (see the docstring).
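For readers outside the PR, the padding hack being discussed can be sketched roughly as follows. The function name and exact behaviour of torchmetrics' internal `_stable_1d_sort` may differ; this is only an illustration of why `nb = 2049` is hard-coded in spirit: padding the input beyond 2048 elements pushed older torch versions onto a sort code path that happened to behave stably (an implementation detail, not a documented guarantee).

```python
import torch

def stable_1d_sort_sketch(x: torch.Tensor, nb: int = 2049) -> tuple:
    """Hypothetical sketch of the padding trick discussed above."""
    n = x.numel()
    if n < nb:
        x_max = x.max()
        # Pad with values strictly larger than every real element,
        # so the padding sorts to the end and can be sliced off.
        pad = (x_max + 1) * torch.ones(nb - n, dtype=x.dtype, device=x.device)
        x = torch.cat([x, pad], 0)
    x_sorted, idx = x.sort()
    # Drop the padding before returning.
    return x_sorted[:n], idx[:n]
```

Because the padding values are larger than any real element, they land at the tail of the sorted tensor and slicing back to `n` elements recovers the original input, sorted.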



@pytest.mark.parametrize("nb", (None, 5, 15))
def test_stable_1d_sort(nb):
Contributor

In your test you are not actually testing stability at all, only whether the sort works. Can you try sorting an array of 8 zeros (and other runs of equal values) and check that it produces indices=[0,1,2,...,7] (i.e. the sort makes no changes, meaning it is stable)?

@Borda (Member)

Borda commented Apr 23, 2021

@carmocca I think this won't be needed after merging #197 (comment)

@Borda Borda mentioned this pull request Apr 23, 2021
4 tasks
@carmocca carmocca closed this Apr 23, 2021
@carmocca carmocca deleted the stable-1d-sort-optional-nb branch April 23, 2021 10:07
Labels
bug / fix Something isn't working
Development

Successfully merging this pull request may close these issues.

AUC computation is incorrect for tensors with >2048 elements.
4 participants